Knowledge graph embedding model based on improved Inception structure
Xiaopeng YU, Ruhan HE, Jin HUANG, Junjie ZHANG, Xinrong HU
Journal of Computer Applications    2022, 42 (4): 1065-1071.   DOI: 10.11772/j.issn.1001-9081.2021071265

Knowledge Graph Embedding (KGE) maps entities and relations into a low-dimensional continuous vector space, enabling machine learning methods to be applied to relational data tasks such as knowledge analysis, reasoning, and completion. Models represented by ConvE (Convolution Embedding) apply Convolutional Neural Networks (CNNs) to knowledge graph embedding in order to capture the interactions between entities and relations, but standard convolution captures such interaction information insufficiently and its feature expression ability is limited. To address this insufficient feature interaction ability, an improved Inception structure was proposed, and a knowledge graph embedding model named InceE was built on it. Firstly, hybrid dilated convolution was used in place of standard convolution to improve the ability to capture feature interaction information. Secondly, a residual network structure was used to reduce the loss of feature information. Link prediction experiments were carried out on the Kinship, FB15k, and WN18 datasets to verify the effectiveness of InceE. Compared with the ArcE and QuatRE models on the Kinship and FB15k datasets, InceE improved Hit@1 by 1.6 and 1.5 percentage points respectively; compared with ConvE on the three datasets, InceE improved Hit@1 by 6.3, 20.8, and 1.0 percentage points. The experimental results show that InceE has a stronger ability to capture feature interaction information.
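As an illustration of the two ideas named in the abstract (hybrid dilated convolution in place of standard convolution, and a residual connection to reduce feature loss), a minimal PyTorch sketch follows; the channel count, kernel size, and dilation rates are illustrative assumptions rather than the exact InceE configuration.

    # Minimal sketch: parallel dilated 3x3 convolutions over the reshaped
    # (entity, relation) feature map, fused by a 1x1 convolution and passed
    # through a residual connection. All sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class HybridDilatedBlock(nn.Module):
        def __init__(self, channels=32, dilations=(1, 2, 3)):
            super().__init__()
            # one 3x3 convolution per dilation rate (the "hybrid" part)
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
                for d in dilations
            ])
            # 1x1 convolution fuses the concatenated branches back to `channels`
            self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
            self.act = nn.ReLU()

        def forward(self, x):
            out = torch.cat([branch(x) for branch in self.branches], dim=1)
            out = self.fuse(out)
            return self.act(out + x)  # residual connection reduces feature loss

    # usage: a batch of feature maps built from stacked entity/relation embeddings
    feats = torch.randn(128, 32, 10, 20)
    print(HybridDilatedBlock()(feats).shape)  # torch.Size([128, 32, 10, 20])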

Joint optimization of user association and resource allocation in cognitive radio ultra-dense networks based on improved genetic algorithm
Junjie ZHANG, Runhe QIU
Journal of Computer Applications    2022, 42 (12): 3856-3862.   DOI: 10.11772/j.issn.1001-9081.2021101777

To address the multi-dimensional resource allocation problem in downlink heterogeneous cognitive radio Ultra-Dense Networks (UDNs), an improved genetic algorithm was proposed to jointly optimize user association and resource allocation with the objective of maximizing the throughput of femtocell users. Firstly, before running the algorithm, a preprocessing step initialized each user's reachable base station and available channel matrices. Secondly, symbol coding was used to encode the user-base station and user-channel matching relationships into a two-dimensional chromosome. Thirdly, elitist replication of the current best individual combined with roulette-wheel selection was used as the selection operator to speed up population convergence. Finally, to keep the algorithm from falling into local optima, a premature-convergence judgment was added to the mutation operator, so that a connection strategy among base stations, users, and channels was obtained within a limited number of iterations. Experimental results show that, with fixed numbers of base stations and channels, the proposed algorithm improves total user throughput by 7.2% and cognitive user throughput by 1.2% compared with a genetic algorithm based on three-dimensional matching, at lower computational complexity. The proposed algorithm reduces the search space of feasible solutions and can effectively improve the total throughput of cognitive radio UDNs with low complexity.
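The main components of the improved genetic algorithm (a two-row symbol-coded chromosome, elitist replication combined with roulette-wheel selection, and a mutation step strengthened when premature convergence is detected) can be sketched roughly as follows; the fitness function and the reachability/availability sets are placeholders, and crossover is omitted for brevity.

    # Rough sketch of the improved GA: row 0 of each chromosome is the serving
    # base station per user, row 1 the channel per user. Fitness is a stand-in
    # for the femtocell-user throughput of the encoded assignment.
    import numpy as np

    rng = np.random.default_rng(0)
    N_USERS, POP, GENS = 20, 40, 200
    reachable_bs = [rng.choice(5, size=3, replace=False) for _ in range(N_USERS)]  # preprocessing
    avail_ch = [rng.choice(10, size=4, replace=False) for _ in range(N_USERS)]

    def random_chromosome():
        # 2 x N_USERS symbol-coded matrix: (base station, channel) per user
        return np.array([[rng.choice(reachable_bs[u]) for u in range(N_USERS)],
                         [rng.choice(avail_ch[u]) for u in range(N_USERS)]])

    def fitness(chrom):
        # placeholder for the total femtocell-user throughput of this assignment
        return float(np.sum(chrom[0] * 0.1 + chrom[1] * 0.05))

    population = [random_chromosome() for _ in range(POP)]
    for gen in range(GENS):
        scores = np.array([fitness(c) for c in population])
        elite = population[int(scores.argmax())].copy()      # best copied directly
        probs = (scores + 1e-9) / (scores + 1e-9).sum()      # roulette wheel
        children = [population[i].copy() for i in rng.choice(POP, size=POP - 1, p=probs)]
        # premature-convergence judgment: if fitness has nearly collapsed to a
        # single value, raise the mutation probability to restore diversity
        p_mut = 0.3 if scores.std() < 1e-3 else 0.05
        for c in children:
            for u in range(N_USERS):
                if rng.random() < p_mut:
                    c[0, u] = rng.choice(reachable_bs[u])
                    c[1, u] = rng.choice(avail_ch[u])
        population = [elite] + children

    best = max(population, key=fitness)   # final base station/channel assignment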

Computation offloading and resource allocation strategy in NOMA-based 5G ultra-dense network
Yongpeng SHI, Junjie ZHANG, Yujie XIA, Ya GAO, Shangwei ZHANG
Journal of Computer Applications    2021, 41 (11): 3319-3324.   DOI: 10.11772/j.issn.1001-9081.2021020214

A Non-Orthogonal Multiple Access (NOMA) based computation offloading and bandwidth allocation strategy was presented to address the insufficient computing capacity of mobile devices and the limited spectrum resources in 5G ultra-dense networks. Firstly, the system model was analyzed and, on this basis, the research problem was formally defined with the objective of minimizing the computation cost of devices. Then, the problem was decomposed into three sub-problems: device computation offloading, system bandwidth allocation, and device grouping and matching, which were solved by simulated annealing, the interior point method, and a greedy algorithm, respectively. Finally, a joint optimization algorithm solved these sub-problems alternately to obtain the optimal computation offloading and bandwidth allocation strategy. Simulation results show that the proposed joint optimization strategy outperforms traditional Orthogonal Multiple Access (OMA) and achieves lower device computation cost than NOMA with equal bandwidth allocation.
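The alternating solution structure described in the abstract can be sketched as a block-coordinate loop; the three sub-solvers below are stubs standing in for the simulated annealing, interior-point, and greedy routines of the paper, and the cost function is likewise a placeholder for the devices' delay-plus-energy computation cost.

    # Structural sketch: update offloading decisions, bandwidth shares and NOMA
    # device groups in turn until the total device cost stops improving.
    import random

    def cost(offload, bandwidth, groups):
        # placeholder: local execution costs 1.0, offloading costs less but
        # grows as the allocated bandwidth share shrinks
        return sum(1.0 if x == 0 else 0.5 + 0.1 / max(b, 1e-6)
                   for x, b in zip(offload, bandwidth))

    def solve_offloading(offload, bandwidth, groups):
        # simulated-annealing stand-in: try flipping one device's decision
        cand = offload.copy()
        i = random.randrange(len(cand))
        cand[i] = 1 - cand[i]
        return cand if cost(cand, bandwidth, groups) < cost(offload, bandwidth, groups) else offload

    def solve_bandwidth(offload, bandwidth, groups):
        # interior-point stand-in: split the bandwidth equally among offloaders
        n = max(1, sum(offload))
        return [1.0 / n if x else 0.0 for x in offload]

    def solve_grouping(offload, bandwidth, groups):
        # greedy stand-in: pair offloading devices in index order
        idx = [i for i, x in enumerate(offload) if x]
        return [idx[i:i + 2] for i in range(0, len(idx), 2)]

    offload = [random.randint(0, 1) for _ in range(8)]   # 1 = offload to the edge
    bandwidth = [0.125] * 8
    groups, prev = [], float("inf")
    while True:
        offload = solve_offloading(offload, bandwidth, groups)
        bandwidth = solve_bandwidth(offload, bandwidth, groups)
        groups = solve_grouping(offload, bandwidth, groups)
        cur = cost(offload, bandwidth, groups)
        if prev - cur < 1e-6:     # stop when the joint cost no longer decreases
            break
        prev = cur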

New blind signature scheme without trusted private key generator
HE Junjie, ZHANG Fan, QI Chuanda
Journal of Computer Applications    2013, 33 (04): 1061-1064.   DOI: 10.3724/SP.J.1087.2013.01061
To eliminate the inherent key escrow problem in identity-based public key cryptosystems, a new identity-based blind signature scheme without a trusted Private Key Generator (PKG) was proposed. In the random oracle model, the scheme was proved existentially unforgeable against adaptive chosen-message and chosen-identity attacks mounted by ordinary attackers or a semi-honest PKG, with security reduced to the Computational Diffie-Hellman (CDH) assumption. Against forgery attacks by a malicious PKG, the legitimate signer can prove to an arbitration institution, through a tracing algorithm, that the signature was forged.
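For readers unfamiliar with the blind-signature primitive itself, the textbook RSA blind signature below illustrates the blind/sign/unblind/verify round trip; it is only an illustration of that interaction pattern and is not the paper's scheme, which is identity-based, removes the trusted PKG, and adds a tracing algorithm.

    # Textbook RSA blind signature with toy parameters (never use keys this
    # small in practice): the signer signs a blinded value and the user
    # unblinds it into a valid signature on the original message.
    from math import gcd
    import hashlib

    p, q = 61, 53                        # toy primes
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))    # signer's private exponent

    def H(msg):                          # hash the message into Z_n
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    m = H(b"blind me")
    r = 2
    while gcd(r, n) != 1:                # blinding factor coprime to n
        r += 1
    blinded = (m * pow(r, e, n)) % n     # user blinds the message
    s_blind = pow(blinded, d, n)         # signer signs without seeing m
    s = (s_blind * pow(r, -1, n)) % n    # user removes the blinding factor
    assert pow(s, e, n) == m             # anyone can verify the signature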